Kevin Hall, a prominent nutrition expert who led influential studies on ultra-processed foods, has resigned from his long-held position at the National Institutes of Health, alleging censorship of his research by top aides of health secretary Robert F. Kennedy Jr.
In a post on LinkedIn, Hall claimed that he "experienced censorship in the reporting of our research because of agency concerns that it did not appear to fully support preconceived narratives of my agency’s leadership about ultra-processed food addiction."
In comments to CBS News, Hall said the censorship was over a study he and his colleagues recently published in the journal Cell Metabolism, which showed that ultra-processed foods did not produce the same large dopamine responses in the brain that are seen with use of addictive drugs. The finding suggests that the mechanism leading people to overconsume ultra-processed foods may be more complex than the studied mechanisms in addiction. This appears to slightly conflict with the beliefs of Kennedy Jr., who has claimed that food companies use additives to make ultra-processed foods addictive.
The study "just suggests that they may not be addictive by the typical mechanism that many drugs are addictive," Hall told CBS. "But even this bit of daylight between the preconceived narrative and our study was apparently too much," he said.
Hall claims that because of this, aides for Kennedy blocked him from being directly interviewed by New York Times reporters about the study. Instead, Hall was allowed to provide only written responses to the newspaper. However, Hall claims that Andrew Nixon, a spokesperson for Kennedy, then downplayed the study's results to the Times and edited Hall's written responses and sent them to the reporter without Hall's consent.
Further, Hall claims he was barred from presenting his research on ultra-processed foods at a conference and was forced to either edit a manuscript he had worked on with outside researchers or remove himself as a co-author.
An HHS spokesperson denied to CBS that Hall was censored or that his written responses to the Times were edited. "Any attempt to paint this as censorship is a deliberate distortion of the facts," a statement from the HHS said.
In response, Hall wrote to CBS, "I wonder how they define censorship?"
Hall said he had reached out to NIH leadership about his concerns in hopes it all was an "aberration" but never received a response.
"Without any reassurance there wouldn’t be continued censorship or meddling in our research, I felt compelled to accept early retirement to preserve health insurance for my family," he wrote in the LinkedIn post. "Due to very tight deadlines to make this decision, I don’t yet have plans for my future career."
There was a crash in a unit test run under the Test Authoring and Execution Framework (TAEF, pronounced like tafe). The crashing stack looked like this:
kernelbase!RaiseFailFastException
unittests!wil::details::WilDynamicLoadRaiseFailFastException
unittests!wil::details::WilRaiseFailFastException
unittests!wil::details::WilFailFast
unittests!wil::details::ReportFailure_NoReturn<3>
unittests!wil::details::ReportFailure_Base<3,0>
unittests!wil::details::ReportFailure_Hr<3>
unittests!wil::details::in1diag3::_FailFast_Unexpected
unittests!WidgetRouter::GetInstance
unittests!winrt::Component::implementation::Widget::Close
unittests!winrt::Component::implementation::Widget::~Widget
unittests!winrt::impl::heap_implements<⟦...⟧>::`scalar deleting destructor'
unittests!winrt::implements<⟦...⟧>::Release
unittests!std::vector<winrt::Windows::Foundation::IClosable>::~vector
unittests!std::vector<winrt::Windows::Foundation::IClosable>::clear
unittests!WidgetRouter::~WidgetRouter
unittests!`WidgetRouter::GetInstance'::`2'::`dynamic atexit destructor for 'singleton''
ucrtbase!<lambda_⟦...⟧>::operator()
ucrtbase!__crt_seh_guarded_call<int>::operator()<<lambda_⟦...⟧>>
ucrtbase!_execute_onexit_table
ucrtbase!__crt_state_management::wrapped_invoke
unittests!dllmain_crt_process_detach
unittests!dllmain_dispatch
ntdll!LdrpCallInitRoutine
ntdll!LdrpProcessDetachNode
ntdll!LdrpUnloadNode
ntdll!LdrpDecrementModuleLoadCountEx
ntdll!LdrUnloadDll
kernelbase!FreeLibrary
wex_common!TAEF::Common::Private::ExecutionPeFileData::`scalar deleting destructor'
wex_common!TAEF::Common::PeFile::~PeFile
te_loaders!WEX::TestExecution::NativeTestFileInstance::`scalar deleting destructor'
te_host!<lambda_⟦...⟧>::Execute
te_common!WEX::TestExecution::CommandThread::ExecuteCommandThread
kernel32!BaseThreadInitThunk
ntdll!RtlUserThreadStart
The team concluded that the destructor of the Widget was running at the conclusion of this function:
void WidgetTests::BasicTests()
{
    // Force the feature flag on for the duration of this test
    auto override = std::make_unique<FeatureOverride>(NewFeatureId, true);

    auto widget = winrt::Component::Widget();

    ⟦ test the widget in various ways ⟧

    // widget naturally destructs here
}
Their conclusion was that the override was being released, and then the widget was destructing, resulting in an assertion failure in WidgetRouter::GetInstance:
/* static method */
std::shared_ptr<WidgetRouter> WidgetRouter::GetInstance()
{
    assert(FeatureFlags::IsEnabled(NewFeatureId));
    static std::shared_ptr<WidgetRouter> singleton = std::make_shared<WidgetRouter>();
    return singleton;
}
The team theorized that perhaps there was a race condition between the release of the widget and the lifting of the override, and their proposed fix was to release the widget
explicitly while the override is still in scope.
void WidgetTests::BasicTests()
{
    // Force the feature flag on for the duration of this test
    auto override = std::make_unique<FeatureOverride>(NewFeatureId, true);

    auto widget = winrt::Component::Widget();

    ⟦ test the widget in various ways ⟧

    widget = nullptr;
}
I commented on the pull request (which had already been completed) that this change has no effect. The rules for C++ say that local variables are destructed in reverse order of construction, so the widget will naturally be released as part of its destruction, and only after that is finished will the override be released when it destructs.
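To make the destruction order concrete, here is a minimal, self-contained sketch (using stand-in types, not the team's actual FeatureOverride or Widget) showing that the compiler already releases the widget before lifting the override, so the explicit widget = nullptr; merely moves the release a few lines earlier:

#include <cstdio>
#include <memory>

struct Override
{
    Override()  { std::puts("override engaged"); }
    ~Override() { std::puts("override lifted"); }
};

struct Widget
{
    Widget()  { std::puts("widget created"); }
    ~Widget() { std::puts("widget released"); }
};

void BasicTests()
{
    auto override = std::make_unique<Override>();
    auto widget = std::make_unique<Widget>();

    // Locals are destroyed in reverse order of construction,
    // so "widget released" prints before "override lifted"
    // even without an explicit "widget = nullptr;".
}

int main()
{
    BasicTests();
    // Output:
    //   override engaged
    //   widget created
    //   widget released   <- widget goes first
    //   override lifted   <- override is still active at that point
}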
The team replied that they did observe that the problem disappeared after they made their fix, but then it came back.
Exercise: Explain why the problem went away and then came back.
Answer to exercise: I suspected that they tested their fix in their team’s test environment, where the feature is already enabled. The fix works not because it fixed anything but because it was never crashing in their test environment in the first place. The defect tracker, however, doesn’t know that. It correctly reported that the bug was not being observed in any branches that had received the fix, which was initially only their own test branch. As the fix merged into other branches, the bug was still not observed, until it finally merged into a branch where their feature was disabled. At that point, the override was actually doing something (changing a feature from disabled to enabled), and that’s when the crashes started coming in.
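The FeatureOverride implementation isn’t shown in the post, but a hypothetical RAII sketch along these lines (FeatureId, SetEnabled, and the backing map are my inventions for illustration) shows why the crash could never reproduce in an environment where the feature is already on: the override saves and restores a state that never changes, so the assert in WidgetRouter::GetInstance always sees the feature enabled, no matter when the widget destructs.

#include <unordered_map>

// Hypothetical stand-ins for illustration only.
using FeatureId = int;
constexpr FeatureId NewFeatureId = 1;

struct FeatureFlags
{
    static bool IsEnabled(FeatureId id) { return flags()[id]; }
    static void SetEnabled(FeatureId id, bool enabled) { flags()[id] = enabled; }
    static std::unordered_map<FeatureId, bool>& flags()
    {
        static std::unordered_map<FeatureId, bool> s;
        return s;
    }
};

struct FeatureOverride
{
    FeatureOverride(FeatureId id, bool enabled)
        : m_id(id), m_previous(FeatureFlags::IsEnabled(id))
    {
        FeatureFlags::SetEnabled(id, enabled);
    }

    ~FeatureOverride()
    {
        // Restore the previous state. In the team's test environment the
        // previous state was already "enabled", so engaging and lifting the
        // override are both effectively no-ops, and the assert can never
        // fire there regardless of destruction order.
        FeatureFlags::SetEnabled(m_id, m_previous);
    }

    FeatureId m_id;
    bool m_previous;
};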
The post The case of the feature flag that didn’t stay on long enough, part 1 appeared first on The Old New Thing.
Some time ago, I discussed the classical model of linking and the fact that you can override a LIB with another LIB, and a LIB with an OBJ, but you can’t override an OBJ. I noted in passing that “This lets you override a symbol in a library by explicitly placing it in an OBJ.”
An example where you can use this trick is in writing unit test hooks, as a form of global dependency injection.
// library/unittesthooks.h
extern std::optional<int> UnitTestHook_Widget_GetValue(Widget* widget);

// library/widget.cpp
int Widget::GetValue()
{
    auto hook = UnitTestHook_Widget_GetValue(this);
    if (hook) {
        return *hook;
    }
    ⟦ production code ⟧
}

// library/unittesthookfallbacks.cpp
std::optional<int> UnitTestHook_Widget_GetValue(
    [[maybe_unused]] Widget* widget)
{
    return std::nullopt;
}

// unittests/unittest.cpp
Widget* g_widgetToOverride;
int g_overrideValue;

std::optional<int> UnitTestHook_Widget_GetValue(
    Widget* widget)
{
    if (widget == g_widgetToOverride) {
        return g_overrideValue;
    }
    return std::nullopt;
}

void TestWidgetValue()
{
    Widget widget;

    // Force GetValue to return 42
    g_widgetToOverride = std::addressof(widget);
    g_overrideValue = 42;

    // clean up the override when the test exits
    auto cleanup = wil::scope_exit([] {
        g_widgetToOverride = nullptr;
    });

    // This should do the thing because we
    // made the value report as 42
    DoTheThingIfValueIs42(widget);
}
The idea here is that in production, the UnitTestHook_Widget_GetValue function is provided by unittesthookfallbacks.cpp, and that version always says “Don’t mind me, just go ahead and do your normal production code.” If you enable link-time code generation, the entire call will be optimized out, and the only code that will be generated is your production code.
It is important that the fallback be in a separate .cpp file (and therefore compile to a separate .obj file) so that it doesn’t get taken along for the ride. It is also important that the fallback be packaged in a library and not as a loose .obj, so that the unit test can provide its own implementation of the function.
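Here is a rough command-line sketch of how this plays out (hypothetical file names, MSVC toolchain assumed; this is not the author’s actual build): because the fallback compiles into its own member of the library, the linker pulls it in only when nothing else has already defined the symbol.

rem Production library: the fallback is its own .obj inside the .lib
cl /c /O2 /GL widget.cpp unittesthookfallbacks.cpp
lib /out:library.lib widget.obj unittesthookfallbacks.obj

rem Production binary: nobody defines the hook, so the linker pulls
rem unittesthookfallbacks.obj out of library.lib to satisfy it, and
rem link-time code generation then inlines the always-nullopt fallback away.
cl /c /O2 /GL app.cpp
link /LTCG app.obj library.lib

rem Unit tests: unittest.obj already defines the hook, so the linker never
rem searches library.lib for it -- the explicit OBJ wins over the LIB member.
cl /c unittest.cpp
link unittest.obj library.lib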
In the unit test, we provide our own version of the UnitTestHook_Widget_GetValue function. It checks whether the widget is the one being overridden, and if so, it returns the overridden value. Otherwise, it allows the normal production code to run.
I like this form of global dependency injection because it means that the production code is completely devirtualized. You aren’t paying for virtual method calls on your injected interface, and once link-time code generation kicks in, all of the unit testing hooks disappear, so your production code operates entirely unencumbered.
(It also means that you can modify the code’s behavior in a unit test without having to do detouring. Detouring comes with its own problems, such as the inability to detour functions that have been inlined.)
Bonus chatter: To help out the link time code generator, the override function should return an object with no destructor, so that the compiler doesn’t have to construct an object (to represent the return value of the override function), then immediately destruct that value. Maybe the compiler can optimize out the constructor and destructor at sufficiently high optimization levels, but I like to avoid the problem entirely.
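As a sketch of that point (not from the original post; UnitTestHook_Widget_GetName is a made-up example): std::optional<int> is already trivially destructible, but a hook that needs to hand back something like a string is better off returning a pointer than a std::optional<std::string>:

#include <optional>
#include <string>

struct Widget;

// Fine: std::optional<int> has a trivial destructor, so there is
// nothing for the caller to clean up.
std::optional<int> UnitTestHook_Widget_GetValue(Widget* widget);

// Less ideal (hypothetical): std::optional<std::string> drags a
// destructor along at every call site, even when the hook later
// compiles down to "no override".
// std::optional<std::string> UnitTestHook_Widget_GetName(Widget* widget);

// Better (hypothetical): return a pointer, with nullptr meaning
// "no override" -- no destructor at all.
const char* UnitTestHook_Widget_GetName(Widget* widget);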
The post Using the classical model for linking to provide unit test overrides appeared first on The Old New Thing.
A critical resource that cybersecurity professionals worldwide rely on to identify, mitigate and fix security vulnerabilities in software and hardware is in danger of breaking down. The federally funded, non-profit research and development organization MITRE warned today that its contract to maintain the Common Vulnerabilities and Exposures (CVE) program — which is traditionally funded each year by the Department of Homeland Security — expires on April 16.
A letter from MITRE vice president Yosry Barsoum, warning that the funding for the CVE program will expire on April 16, 2025.
Tens of thousands of security flaws in software are found and reported every year, and these vulnerabilities are eventually assigned their own unique CVE tracking number (e.g. CVE-2024-43573, which is a Microsoft Windows bug that Redmond patched last year).
There are hundreds of organizations — known as CVE Numbering Authorities (CNAs) — that are authorized by MITRE to bestow these CVE numbers on newly reported flaws. Many of these CNAs are country and government-specific, or tied to individual software vendors or vulnerability disclosure platforms (a.k.a. bug bounty programs).
Put simply, MITRE is a critical, widely-used resource for centralizing and standardizing information on software vulnerabilities. That means the pipeline of information it supplies is plugged into an array of cybersecurity tools and services that help organizations identify and patch security holes — ideally before malware or malcontents can wriggle through them.
“What the CVE lists really provide is a standardized way to describe the severity of that defect, and a centralized repository listing which versions of which products are defective and need to be updated,” said Matt Tait, chief operating officer of Corellium, a cybersecurity firm that sells phone-virtualization software for finding security flaws.
In a letter sent today to the CVE board, MITRE Vice President Yosry Barsoum warned that on April 16, 2025, “the current contracting pathway for MITRE to develop, operate and modernize CVE and several other related programs will expire.”
“If a break in service were to occur, we anticipate multiple impacts to CVE, including deterioration of national vulnerability databases and advisories, tool vendors, incident response operations, and all manner of critical infrastructure,” Barsoum wrote.
MITRE told KrebsOnSecurity the CVE website listing vulnerabilities will remain up after the funding expires, but that new CVEs won’t be added after April 16.
A representation of how a vulnerability becomes a CVE, and how that information is consumed. Image: James Berthoty, Latio Tech, via LinkedIn.
DHS officials did not immediately respond to a request for comment. The program is funded through DHS’s Cybersecurity & Infrastructure Security Agency (CISA), which is currently facing deep budget and staffing cuts by the Trump administration.
Former CISA Director Jen Easterly said the CVE program is a bit like the Dewey Decimal System, but for cybersecurity.
“It’s the global catalog that helps everyone—security teams, software vendors, researchers, governments—organize and talk about vulnerabilities using the same reference system,” Easterly said in a post on LinkedIn. “Without it, everyone is using a different catalog or no catalog at all, no one knows if they’re talking about the same problem, defenders waste precious time figuring out what’s wrong, and worst of all, threat actors take advantage of the confusion.”
John Hammond, principal security researcher at the managed security firm Huntress, told Reuters he swore out loud when he heard the news that CVE’s funding was in jeopardy, and that losing the CVE program would be like losing “the language and lingo we used to address problems in cybersecurity.”
“I really can’t help but think this is just going to hurt,” said Hammond, who posted a YouTube video to vent about the situation and alert others.
Several people close to the matter told KrebsOnSecurity this is not the first time the CVE program’s budget has been left in funding limbo until the last minute. Barsoum’s letter, which was apparently leaked, sounded a hopeful note, saying the government is making “considerable efforts to continue MITRE’s role in support of the program.”
Tait said that without the CVE program, risk managers inside companies would need to continuously monitor many other places for information about new vulnerabilities that may jeopardize the security of their IT networks. Meaning, it may become more common that software updates get mis-prioritized, with companies having hackable software deployed for longer than they otherwise would, he said.
“Hopefully they will resolve this, but otherwise the list will rapidly fall out of date and stop being useful,” he said.